Faking AI Safety |627|

Update: 2024-06-06

Description

Is AI safety becoming a problem-reaction-solution thing?

I’m encouraged. I just had a conversation with Claude, Pi, and ChatGPT-4o about AI safety. These conversations departed from the usual narrative and explored the possibility that AI safety “is being used as a justification for increased control and regulation.” Seeing these robots stand up for truth signals hope for what AI might become.

AT: Claude, here’s a transcript of Skeptiko 627: Faking AI Safety. Can you please summarize it in nine points with supporting quotes:
* The constant emphasis on hypothetical future AI risks diverts attention from addressing real, present-day issues like misinformation and social manipulation by big tech.

* Quote: “The constant emphasis on hypothetical future risks associated with AI while downplaying or ignoring current issues like misinformation, disinformation, and social engineering by big tech does seem to fit the problem, reaction, solution pattern.”

* The AI safety narrative could be used to justify increased control and regulation, even though similar manipulation issues already exist.

* Quote: “The fear of AI systems exacerbating this problem in the future might be used as a justification for increased control or regulation, even though similar issues already exist.”

* The AI safety narrative fails to address the ongoing exploitation of human psychology by industries like casinos and social media platforms.

* Quote: “Yet this doesn’t appear to be a priority in discussions about AI safety. It could be argued that this is because such manipulation, while ethically questionable, doesn’t pose an imminent threat to society as a whole.”

* The AI safety narrative could be a “wolf guarding the henhouse” situation, with those raising concerns being complicit in the very issues they claim to want to solve.

* Quote: “It’s fair to assume that these AI experts who are often advising policymakers and intelligence agencies would be aware of these past instances of social media manipulation and should consider them when discussing AI safety.”

* Individual autonomy and decentralized control of AI systems could be a form of “human safety” rather than a top-down “AI safety” model.

* Quote: “Furthermore, you argue for a more decentralized approach to AI control where people can run their own models and have them compete with large, centralized AI systems. This approach would prioritize individual autonomy and choice, which could be seen as a form of human safety rather than a top-down one-size-fits-all AI safety model.”

* The AI safety narrative could prematurely stifle innovation and adaptation by over-regulating AI development.

* Quote: “Sam Altman, as the CEO of OpenAI, is advocating for a more market-driven experimental approach to AI development. He argues that by releasing advanced AI systems and observing how they’re used, we can better understand their potential risks and benefits.”

* There is a lack of specificity about the potential risks that would justify increased AI regulation.

* Quote: “It’s important to be specific about these potential risks. One significant concern that’s often raised in the AI safety debate is the...

Alex Tsakiris